
    A COMPUTATIONAL TOOL TO EVALUATE THE SAMPLE SIZE IN MAP POSITIONAL ACCURACY

    In many countries, positional accuracy control by points in cartography or spatial data consists of comparing the coordinates of a set of well-defined points with the same set of points taken from a more accurate source. Usually, each country determines a maximum number of points that may present error values above a pre-established threshold. In many cases, the standards define the sample size as 20 points, without further consideration, and fix this threshold at 10% of the sample. However, the sample size (n), when the statistical risk is taken into account, especially when the percentage of outliers is around 10%, can lead to a producer risk (rejecting a good map) and a user risk (accepting a bad map). This article analyzes this issue and makes it possible to define the sample size considering both the producer's and the user's risk. As a tool, a program developed by us allows the sample size to be defined according to the risk that the producer or user can, or wants to, assume. The analysis uses 600 control points, each with a known error. We performed simulations with a sample size of 20 points (n) and calculated the associated risk. We then changed the value of n, using smaller and larger sizes, and calculated the associated risk for both the user and the producer in each case. The computer program draws the operating characteristic (risk) curves, which depend on three parameters: the number of control points; the number of iterations used to create the curves; and the percentage of control points above the threshold, which can follow the Brazilian standard or the parameters of other countries. Several graphs and tables created with different parameters are presented, supporting better decisions by both the user and the producer and opening possibilities for further simulations and research.
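    The abstract describes a Monte Carlo procedure: repeatedly draw samples of n control points from a population with known errors, apply the acceptance rule (at most a fixed number of out-of-tolerance points), and estimate the acceptance probability for different true percentages of bad points. The sketch below is only an illustration of that idea, not the authors' program; the population size, sample size, acceptance limit, and iteration count are placeholders chosen to mirror the figures quoted in the abstract (600 points, n = 20, 10% limit).

    ```python
    import numpy as np

    def acceptance_probability(pop_size, bad_fraction, n, max_bad, iterations=10_000, seed=0):
        """Monte Carlo estimate of P(accept) when a known fraction of the
        control points exceeds the error tolerance.

        pop_size     -- total number of control points (e.g. 600)
        bad_fraction -- true share of points with errors above the tolerance
        n            -- sample size drawn without replacement (e.g. 20)
        max_bad      -- out-of-tolerance points allowed in the sample (e.g. 2, i.e. 10% of 20)
        """
        rng = np.random.default_rng(seed)
        # 1 marks a point whose error exceeds the tolerance, 0 marks a compliant point.
        population = np.zeros(pop_size, dtype=int)
        population[: int(round(bad_fraction * pop_size))] = 1
        accepted = 0
        for _ in range(iterations):
            sample = rng.choice(population, size=n, replace=False)
            if sample.sum() <= max_bad:
                accepted += 1
        return accepted / iterations

    # Operating characteristic (risk) curve: acceptance probability versus the true
    # fraction of out-of-tolerance points. Producer risk is 1 - P(accept) when the
    # map actually meets the 10% limit; user risk is P(accept) when it does not.
    for frac in (0.05, 0.10, 0.15, 0.20, 0.30):
        print(frac, acceptance_probability(600, frac, n=20, max_bad=2))
    ```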

    ALTIMETRY ASSESSMENT OF ASTER GDEM v2 AND SRTM v3 DIGITAL ELEVATION MODELS: A CASE STUDY IN URBAN AREA OF BELO HORIZONTE, MG, BRAZIL

    This work is an altimetric evaluation study of the Digital Elevation Models ASTER GDEM version 2 and SRTM version 3. Both models are readily available free of charge; however, because they are built from different remote sensing methods, they are also expected to present different data quality. LIDAR data with 25 cm vertical accuracy were used as the reference for validation. The evaluation, carried out in an urbanized area, investigated the distribution of the residuals and the relationship between the observed errors and land slope classes. Remote sensing principles, quantitative statistical methods and the Cartographic Accuracy Standard of Digital Mapping Products (PEC-PCD) were considered. The results indicated a strong positive linear correlation and a functional relationship between the evaluated models and the reference model. Residuals between -4.36 m and 3.11 m accounted for 47.7% of the ASTER GDEM samples and 63.7% of the SRTM samples. In both evaluated models, Root Mean Square Error values increased with increasing land slope. Considering the 1:50,000 mapping scale, the PEC-PCD classification indicated class B for SRTM and class C for ASTER GDEM. In all analyses, SRTM presented smaller altimetric errors than ASTER GDEM, except in areas with steep relief.
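    The core computation implied by the abstract is straightforward: take DEM heights and reference LIDAR heights at the same check points, form the residuals, and summarize them (mean, RMSE) per slope class. The following is a minimal sketch of that kind of summary, not the paper's actual workflow; the slope-class boundaries and argument names are assumptions for illustration, and no PEC-PCD tolerance values are hardcoded.

    ```python
    import numpy as np

    def rmse_by_slope_class(dem_heights, ref_heights, slope_deg,
                            class_edges=(0, 3, 8, 20, 45, 90)):
        """Residuals (DEM minus LIDAR reference) and RMSE grouped by land-slope class.

        dem_heights, ref_heights, slope_deg -- 1-D arrays sampled at the same check points
        class_edges -- slope-class boundaries in degrees (illustrative values only)
        """
        residuals = np.asarray(dem_heights) - np.asarray(ref_heights)
        slope_deg = np.asarray(slope_deg)
        results = {}
        for lo, hi in zip(class_edges[:-1], class_edges[1:]):
            mask = (slope_deg >= lo) & (slope_deg < hi)
            if mask.any():
                results[f"{lo}-{hi} deg"] = {
                    "n": int(mask.sum()),
                    "mean_residual_m": float(residuals[mask].mean()),
                    "rmse_m": float(np.sqrt(np.mean(residuals[mask] ** 2))),
                }
        return results
    ```

    The per-class RMSE values returned by such a summary are what would then be compared against the tolerances of the applicable PEC-PCD class for the chosen mapping scale.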
